Added xt::convolve #2507
JohanMabille merged 1 commit into xtensor-stack:master from spectre-ns:convolve
Conversation
I noted where the code could be improved with new constexpr functionality released in C++17. I think this library strives for C++14 or newer compatibility, correct?
JohanMabille
left a comment
Thanks for tackling this!
 * should v be longer than a.
 */
template <class E1, class E2, class E3>
inline auto convolve(E1&& a, E2&& v, E3&& mode)
E3 being the mode, it can be passed by value.
auto temp = a;
a = v;
v = temp;
}
A means of avoiding the copy is to have an additional function performing the implementation, called by this one:
namespace detail
{
template <class E1, class E2, class E3>
inline auto convolve_impl(E1&& e1, E2&& e2, E3 mode)
{
// Implementation with no checks
}
}
template <class E1, class E2, class E3>
inline auto convolve(E1&& a, E2&& v, E3 mode)
{
    // Dimensions and shape checks
    // ...
    bool cond = a.shape(0) < v.shape(0);
    return cond ? detail::convolve_impl(std::forward<E2>(v), std::forward<E1>(a), mode)
                : detail::convolve_impl(std::forward<E1>(a), std::forward<E2>(v), mode);
}

namespace conv_mode
{
struct valid {};
struct full {};
Is it intentional not to implement the 'same' mode?
Yeah, I figured I could push that implementation off to another pull request, because it can be implemented by trimming the full output.
v = temp;
}

//this could benefit from C++17 if constexpr
This can be achieved with tag dispatching (although I'm pretty sure the compiler can optimize the current implementation)
template <class E1, class E2, class E3>
inline auto convolve_impl(E1&& e1, E2&& e2, E3 mode)
{
XTENSOR_THROW(...);
}
template <class E1, class E2>
inline auto convolve_impl(E1&& e1, E2&& e2, conv_mode::full)
{
// ...
}
template <class E1, class E2>
inline auto convolve_impl(E1&& e1, E2&& e2, conv_mode::valid)
{
//....
}
Done! I didn't implement the generic case, as it would be better to have a compile-time error rather than a runtime one.
size_t const nv = v.shape(0);
size_t const n = na + nv - 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
nitpicking: an opening curly brace goes on a new line
/*
 * convolution mode placeholders
 */
namespace conv_mode
I would prefer the name convolve_mode I think.
 * @param a 1D expression
 * @param v 1D convolutional kernel
Very minor: this is a bit different from NumPy's documentation. In principle that is OK; it's just that one could argue that either of the arguments is the kernel of the other, so naming one 'kernel' might be confusing.
This is a non-issue now, because removing the copy makes it symmetrical.
 *
 * @param a 1D expression
 * @param v 1D convolutional kernel
 * @param mode placeholder from conv_mode namespace
Suggested change:
- * @param mode placeholder from conv_mode namespace
+ * @param mode placeholder selecting the algorithm from #conv_mode
}

/*
 * convolution mode placeholders
A brief description is in order, I think. Also, you could point out that these modes are the same as NumPy's.
{
using value_type = typename std::decay<E1>::type::value_type;

if (a.shape().size() != 1 || v.shape().size() != 1)
Suggested change:
- if (a.shape().size() != 1 || v.shape().size() != 1)
+ if (a.dimension() != 1 || v.dimension() != 1)
size_t const jmn = (i >= nv - 1) ? i - (nv - 1) : 0;
size_t const jmx = (i < na - 1) ? i : na - 1;
for (size_t j = jmn; j <= jmx; ++j) {
out[i] += (a(j) * v(i - j));
Suggested change:
- out[i] += (a(j) * v(i - j));
+ out(i) += a(j) * v(i - j);
(I don't think the parentheses are needed there.)
size_t const na = a.shape(0);
size_t const nv = v.shape(0);
int const n = std::max(na, nv) - std::min(na, nv) + 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
out[i] += a(j) * v(k);
Suggested change:
- out[i] += a(j) * v(k);
+ out(i) += a(j) * v(k);
int const n = std::max(na, nv) - std::min(na, nv) + 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
Suggested change:
- for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
+ for (int j = v.shape(0) - 1, k = i; j >= 0; --j) {
no?
Also I think that the int can be avoided, e.g. https://stackoverflow.com/a/23244694/2646505
But most importantly, why is it even relevant to loop backward?
TEST(xmath, convolve_full)
{
xt::xarray<double> x = { 1.0, 1.0, 1.0 };
It'd be good to add another test which does not involve only unity inputs.
tdegeus
left a comment
Thanks! I did not have time to check the conditions in detail.
@tdegeus I accepted all your suggestions! Thanks!
tdegeus
left a comment
Nitpicking. Other than that, things look good to me, thanks! A side note: I'm surprised that NumPy does not support a periodic version. True, one could (and probably should) use Fourier for that. But still.
I think a periodic convolution in the time domain would simply be a wrapped convolution? But yeah, Fourier does the wrapping implicitly, so best to do that. SciPy probably offers that functionality.
Final question: could you squash your commits? We prefer here not to have GitHub do that, so that we have an easier way to generate a changelog (if I remember well, I think that nowadays GitHub can do it; still, in the meantime...). Thanks!
I definitely will do that. When I submitted there were only two commits. I appreciate the thorough review!
This is embarrassing... I messed up; hopefully I didn't break the PR...
Still looks OK; the merge commit was not necessary, I guess.
Fixed it.
Awesome! Thanks a lot!
Checklist
Description
Added a 1D convolution function with an API similar to NumPy's, and provided a couple of small tests to verify its functionality.
Fixes #1928